Spatiotemporal forecasting has a wide range of applications in neuroscience, climate, and transportation. Traffic forecasting is a canonical example of such a learning task. The task is challenging due to (1) complex spatial dependency on road networks, (2) non-linear temporal dynamics with changing road conditions, and (3) the inherent difficulty of long-term forecasting. To address these challenges, we propose to model the traffic flow as a diffusion process on a directed graph and introduce the Diffusion Convolutional Recurrent Neural Network (DCRNN), a deep learning framework for traffic forecasting that incorporates both spatial and temporal dependency in the traffic flow. Specifically, DCRNN captures the spatial dependency using bidirectional random walks on the graph, and the temporal dependency using an encoder-decoder architecture with scheduled sampling. We evaluate the framework on two real-world, large-scale road network traffic datasets and observe a consistent improvement of 12%-15% over state-of-the-art baselines.
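The bidirectional diffusion convolution at the heart of DCRNN can be sketched in a few lines: a filter is a weighted sum of the first K powers of the forward (out-degree-normalised) and reverse (in-degree-normalised) random-walk matrices, applied to the node signals. A minimal NumPy sketch on a toy 3-node directed graph (the graph, coefficients, and truncation order `K` are illustrative, not values from the paper):

```python
import numpy as np

def diffusion_conv(X, W, theta_fwd, theta_bwd, K):
    """One diffusion-convolution filter: a weighted sum of K powers of the
    forward and backward random-walk transition matrices of a directed graph.

    X: (N, F) node signals; W: (N, N) weighted adjacency;
    theta_fwd, theta_bwd: (K,) filter coefficients.
    """
    P_fwd = W / W.sum(axis=1, keepdims=True)      # out-degree-normalised walk
    P_bwd = W.T / W.T.sum(axis=1, keepdims=True)  # in-degree-normalised walk
    out = np.zeros_like(X)
    Xf, Xb = X.copy(), X.copy()
    for k in range(K):
        out += theta_fwd[k] * Xf + theta_bwd[k] * Xb
        Xf, Xb = P_fwd @ Xf, P_bwd @ Xb           # one more diffusion step
    return out

# tiny 3-node directed road graph with weighted edges
W = np.array([[0., 1., 0.],
              [0., 0., 2.],
              [1., 0., 0.]])
X = np.eye(3)                                     # one-hot node features
Y = diffusion_conv(X, W, theta_fwd=np.array([0.5, 0.5]),
                   theta_bwd=np.array([0.0, 0.5]), K=2)
print(Y.shape)  # (3, 3)
```

In the full model, one such filter per input/output feature pair replaces the matrix multiplications inside the GRU gates of the recurrent encoder-decoder.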
Many dynamical systems -- from robots interacting with their surroundings to large-scale multiphysics systems -- involve a number of interacting subsystems. Toward the objective of learning composite models of such systems from data, we present i) a framework for compositional neural networks, ii) algorithms to train these models, iii) a method to compose the learned models, iv) theoretical results that bound the error of the resulting composite models, and v) a method to learn the composition itself, when it is not known a priori. The end result is a modular approach to learning: neural network submodels are trained on trajectory data generated by relatively simple subsystems, and the dynamics of more complex composite systems are then predicted without requiring additional data generated by the composite systems themselves. We achieve this compositionality by representing the system of interest, as well as each of its subsystems, as a port-Hamiltonian neural network (PHNN) -- a class of neural ordinary differential equations that uses the port-Hamiltonian systems formulation as inductive bias. We compose collections of PHNNs by using the system's physics-informed interconnection structure, which may be known a priori, or may itself be learned from data. We demonstrate the novel capabilities of the proposed framework through numerical examples involving interacting spring-mass-damper systems. Models of these systems, which include nonlinear energy dissipation and control inputs, are learned independently. Accurate compositions are learned using an amount of training data that is negligible in comparison with that required to train a new model from scratch. Finally, we observe that the composite PHNNs enjoy properties of port-Hamiltonian systems, such as cyclo-passivity -- a property that is useful for control purposes.
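The port-Hamiltonian structure that PHNNs use as inductive bias, dx/dt = (J - R) ∇H(x) + G u, can be illustrated on the abstract's running example, a spring-mass-damper. In a PHNN the gradient ∇H comes from a neural network; here the known quadratic energy stands in for it, and the parameter values and forward-Euler stepping are assumptions of this sketch:

```python
import numpy as np

# Port-Hamiltonian form of a spring-mass-damper: state x = (q, p),
# dx/dt = (J - R) @ grad_H(x) + G * u.
m, k, c = 1.0, 2.0, 0.3               # mass, stiffness, damping (assumed)
J = np.array([[0., 1.], [-1., 0.]])   # skew-symmetric interconnection
R = np.array([[0., 0.], [0., c]])     # positive semi-definite dissipation
G = np.array([0., 1.])                # input port (force on the mass)

def grad_H(x):
    """Gradient of the energy H = k q^2 / 2 + p^2 / (2 m); in a PHNN this
    would be the gradient of a learned Hamiltonian network."""
    q, p = x
    return np.array([k * q, p / m])

def step(x, u, dt=1e-3):
    """One forward-Euler step of the port-Hamiltonian dynamics."""
    return x + dt * ((J - R) @ grad_H(x) + G * u)

x = np.array([1.0, 0.0])              # start stretched, at rest
H0 = 0.5 * k * x[0]**2 + x[1]**2 / (2 * m)
for _ in range(5000):                 # simulate 5 seconds, unforced
    x = step(x, u=0.0)
H = 0.5 * k * x[0]**2 + x[1]**2 / (2 * m)
print(H <= H0)                        # stored energy decays without input
```

The decay of the unforced energy is the passivity-type property the abstract refers to; composition amounts to wiring the ports of several such blocks together through an interconnection matrix.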
Deep neural networks today successfully fit very complex functions, but dense models are becoming prohibitively expensive for inference. To mitigate this, one promising direction is networks that activate a sparse subgraph of the network. The subgraph is selected by a data-dependent routing function that fixes a mapping from inputs to subnetworks (e.g., the Mixture of Experts (MoE) in the Switch Transformer). However, prior work has been largely empirical: although existing routing functions work well in practice, they come with no theoretical guarantees on approximation power. We aim to provide a theoretical explanation for the power of sparse networks. As our first contribution, we present a formal model of data-dependent sparse networks that captures the salient aspects of popular architectures. We then introduce a routing function based on locality-sensitive hashing (LSH) that enables us to reason about how sparse networks approximate target functions. After representing LSH-based sparse networks in our model, we prove that sparse networks can match the approximation power of dense networks on Lipschitz functions. Applying LSH to the input vector means that the experts interpolate the target function in different sub-regions of the input space. To support our theory, we define various datasets based on Lipschitz target functions and show that sparse networks enjoy a favourable trade-off between the number of active units and approximation quality.
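The LSH routing idea can be sketched concretely: hash each input with random hyperplanes, and let the hash bucket select which expert is activated, so nearby inputs share an expert and each expert only has to fit the target function on one sub-region of input space. The linear "experts" and all hyperparameters below are placeholders, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_bits = 8, 3                      # input dim, hash bits -> 2**3 buckets
planes = rng.normal(size=(n_bits, d)) # random hyperplanes for hyperplane LSH

def route(x):
    """Hash x by the signs of its projections; the bucket id is the expert id."""
    bits = (planes @ x > 0).astype(int)
    return int(bits @ (1 << np.arange(n_bits)))

# experts: one tiny linear model per bucket (stand-ins for subnetworks)
experts = rng.normal(size=(2**n_bits, d))

def sparse_forward(x):
    e = route(x)                      # only one expert is activated per input
    return experts[e] @ x, e

x = rng.normal(size=d)
y, e = sparse_forward(x)
# inputs on the same side of every hyperplane share an expert:
assert route(x) == route(1.5 * x)
print(e)
```

The per-input cost is one expert instead of all 2**n_bits of them, which is the compute saving that sparsely activated architectures exploit.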
We propose a new algorithm for learning the parameters of hidden Markov models (HMMs) in a geometric setting where the observations take values in a Riemannian manifold. In particular, we lift a second-order method-of-moments algorithm that incorporates non-consecutive correlations to a more general setting in which the observations take values in a Riemannian symmetric space of non-positive curvature and the observation likelihoods are Riemannian Gaussians. The resulting algorithm decomposes into a Riemannian Gaussian mixture-model estimation step followed by a sequence of convex optimisation procedures. We demonstrate through examples that the learner can achieve significant improvements in speed and numerical accuracy compared with existing learners.
We aim for image-based novelty detection. Despite considerable progress, existing models either fail or face a dramatic drop under the so-called "near-distribution" setting, where the differences between normal and anomalous samples are subtle. We first demonstrate that existing methods experience up to a 20% decrease in performance in the near-distribution setting. Next, we propose to exploit a score-based generative model to produce synthetic near-distribution anomalous data. Our model is then fine-tuned to distinguish such data from the normal samples. We provide a quantitative as well as qualitative evaluation of this strategy, and compare the results with a variety of GAN-based models. The effectiveness of our method for both near-distribution and standard novelty detection is assessed through extensive experiments on datasets in diverse applications such as medical images, object classification, and quality control. This reveals that our method considerably improves over existing models, and consistently decreases the gap between the near-distribution and standard novelty detection performance. The code repository is available at https://github.com/rohban-lab/FITYMI.
Joint time-frequency scattering (JTFS) is a convolutional operator in the time-frequency domain that extracts spectrotemporal modulations at various rates and scales. It offers an idealised model of spectrotemporal receptive fields (STRF) in the primary auditory cortex, and thus may serve as a biologically plausible surrogate for human perceptual judgments at the scale of isolated audio events. Yet, prior implementations of JTFS and STRF have remained outside the standard toolkit of perceptual similarity measures and evaluation methods for audio generation. We trace this issue to three limitations: differentiability, speed, and flexibility. In this paper, we present an implementation of time-frequency scattering in Python. Unlike prior implementations, ours accommodates NumPy, PyTorch, and TensorFlow as backends and is thus portable to both CPU and GPU. We demonstrate the usefulness of JTFS through three applications: unsupervised manifold learning of spectrotemporal modulations, supervised classification of musical instruments, and texture resynthesis of bioacoustic sounds.
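The STRF-like building block of JTFS, a filter jointly tuned to a temporal rate and a spectral scale, can be sketched with a 2D Gabor applied to a toy scalogram: a filter whose orientation matches a rising ridge responds more strongly than one tuned the other way. A minimal NumPy sketch (the filter shapes, rates, and scales are illustrative; the actual library implements full wavelet filter banks across its backends):

```python
import numpy as np

def gabor2d(shape, rate, scale, sigma_t=8.0, sigma_f=4.0):
    """A 2D Gabor jointly tuned to a temporal rate and a spectral scale --
    an idealised spectrotemporal receptive field."""
    T, F = shape
    t = np.arange(T) - T // 2
    f = np.arange(F) - F // 2
    tt, ff = np.meshgrid(t, f, indexing="ij")
    envelope = np.exp(-(tt / sigma_t) ** 2 / 2 - (ff / sigma_f) ** 2 / 2)
    carrier = np.exp(1j * 2 * np.pi * (rate * tt + scale * ff))
    return envelope * carrier

def jtfs_coeff(scalogram, rate, scale):
    """Magnitude response to one (rate, scale) pair, via FFT-based 2D
    (circular) convolution."""
    psi = gabor2d(scalogram.shape, rate, scale)
    return np.abs(np.fft.ifft2(np.fft.fft2(scalogram) * np.fft.fft2(psi)))

# toy scalogram: an upward-moving ridge, like a rising chirp
T, F = 64, 32
S = np.zeros((T, F))
S[np.arange(T), np.arange(T) * F // T] = 1.0

up = jtfs_coeff(S, rate=0.1, scale=-0.2).mean()    # oriented along the ridge
down = jtfs_coeff(S, rate=0.1, scale=0.2).mean()   # oriented against it
print(up, down)
```

The direction-selective response (`up` exceeding `down`) is what first-order scattering alone cannot capture, and is why joint time-frequency filters are needed to distinguish rising from falling patterns.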
Riemannian Gaussian distributions were initially introduced as basic building blocks for learning models which aim to capture the intrinsic structure of statistical populations of positive-definite matrices (here called covariance matrices). While the potential applications of such models have attracted significant attention, a major obstacle still stands in the way of these applications: there seems to exist no practical method of computing the normalising factors associated with Riemannian Gaussian distributions on spaces of high-dimensional covariance matrices. The present paper shows that this missing method comes from an unexpected new connection with random matrix theory. Its main contribution is to prove that Riemannian Gaussian distributions of real, complex, or quaternion covariance matrices are equivalent to orthogonal, unitary, or symplectic log-normal matrix ensembles. This equivalence yields a highly efficient approximation of the normalising factors, in terms of a rather simple analytic expression. The error due to this approximation decreases like the inverse square of dimension. Numerical experiments are conducted which demonstrate how this new approximation can overcome the difficulties that have impeded applications to real-world datasets of high-dimensional covariance matrices. The paper then turns to Riemannian Gaussian distributions of block-Toeplitz covariance matrices. These are equivalent to yet another kind of random matrix ensembles, here called "acosh-normal" ensembles. Orthogonal and unitary "acosh-normal" ensembles correspond to the cases of block-Toeplitz with Toeplitz blocks, and block-Toeplitz (with general blocks) covariance matrices, respectively.
Neural ordinary differential equations (NODEs) -- parametrizations of differential equations using neural networks -- have shown tremendous promise in learning models of unknown continuous-time dynamical systems from data. However, every forward evaluation of a NODE requires numerical integration of the neural network used to capture the system dynamics, making their training prohibitively expensive. Existing works rely on off-the-shelf adaptive step-size numerical integration schemes, which often require an excessive number of evaluations of the underlying dynamics network to obtain sufficient accuracy for training. By contrast, we accelerate the evaluation and the training of NODEs by proposing a data-driven approach to their numerical integration. The proposed Taylor-Lagrange NODEs (TL-NODEs) use a fixed-order Taylor expansion for numerical integration, while also learning to estimate the expansion's approximation error. As a result, the proposed approach achieves the same accuracy as adaptive step-size schemes while employing only low-order Taylor expansions, thus greatly reducing the computational cost necessary to integrate the NODE. A suite of numerical experiments, including modeling dynamical systems, image classification, and density estimation, demonstrates that TL-NODEs can be trained more than an order of magnitude faster than state-of-the-art approaches, without any loss in performance.
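The fixed-order Taylor step underlying TL-NODEs can be sketched without the learned remainder term: x(t+h) ≈ x + h f(x) + (h²/2) J_f(x) f(x), where the Jacobian-vector product would come from automatic differentiation in practice. A minimal sketch on the known dynamics dx/dt = -x (the finite-difference JVP and the step size are assumptions of this illustration, and the dynamics function stands in for the trained network):

```python
import numpy as np

def f(x):
    """Stand-in for the dynamics network: dx/dt = -x."""
    return -x

def jvp(f, x, v, eps=1e-6):
    """Finite-difference Jacobian-vector product J_f(x) v
    (replaced by autodiff in a real implementation)."""
    return (f(x + eps * v) - f(x - eps * v)) / (2 * eps)

def taylor2_step(x, h):
    """Second-order Taylor step: x + h f + (h^2/2) J_f f.
    TL-NODEs add a learned estimate of the truncation error on top."""
    fx = f(x)
    return x + h * fx + 0.5 * h**2 * jvp(f, x, fx)

x, h = np.array([1.0]), 0.1
for _ in range(10):                   # integrate from t = 0 to t = 1
    x = taylor2_step(x, h)
print(float(x[0]), np.exp(-1.0))      # close to the exact solution e^{-1}
```

Each step costs a fixed, small number of dynamics evaluations, in contrast to adaptive step-size solvers, which may repeatedly re-evaluate the network to meet an error tolerance.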
We present a novel method for reducing the computational complexity of rigorously estimating the partition functions (normalising constants) of Gibbs (Boltzmann) distributions, which arise ubiquitously in probabilistic graphical models. A major obstacle to practical applications of Gibbs distributions is the need to estimate their partition functions. The state of the art in addressing this problem is multi-stage algorithms, which consist of a cooling schedule and a mean estimator at each step of the schedule. While the cooling schedules in these algorithms are adaptive, the mean estimation computations use MCMC as a black box to draw approximate samples. We develop a doubly adaptive approach, combining the adaptive cooling schedule with an adaptive MCMC mean estimator, whose number of Markov chain steps adapts dynamically to the underlying chain. Through rigorous theoretical analysis, we prove that our method outperforms state-of-the-art algorithms in several factors: (1) the computational complexity of our method is smaller; (2) our method is less sensitive to loose bounds on mixing times, an inherent component of these algorithms; and (3) the improvement achieved by our method is particularly significant in the most challenging regime of high-precision estimation. We demonstrate the advantages of our method in experiments run on classic factor graphs, such as voting models and the Ising model.
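The telescoping-product identity that multi-stage estimators of this kind rely on, Z(β_K)/Z(β_0) = ∏_i E_{β_i}[exp(-(β_{i+1} - β_i) H(x))], can be checked exactly on a toy model. The sketch below enumerates all states of a 2x2 Ising model, so each expectation is exact; the paper's contribution lies in the doubly adaptive schedule and MCMC mean estimator that replace this enumeration at scale (the schedule below is a fixed, illustrative one):

```python
import itertools
import math

# 2x2 Ising model: 4 spins, 4 grid edges, 16 states in total.
spins = list(itertools.product([-1, 1], repeat=4))
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]

def H(s):
    """Ising energy: -sum of products of neighbouring spins."""
    return -sum(s[i] * s[j] for i, j in edges)

def Z(beta):
    """Exact partition function by enumeration."""
    return sum(math.exp(-beta * H(s)) for s in spins)

betas = [0.0, 0.25, 0.5, 0.75, 1.0]   # fixed cooling schedule (illustrative)
ratio = 1.0
for b0, b1 in zip(betas, betas[1:]):
    zb = Z(b0)
    # exact expectation of exp(-(b1 - b0) H) under the Gibbs measure at b0;
    # the multi-stage algorithms estimate this quantity with MCMC samples
    ratio *= sum(math.exp(-b0 * H(s)) / zb * math.exp(-(b1 - b0) * H(s))
                 for s in spins)
print(abs(ratio - Z(1.0) / Z(0.0)) < 1e-9)   # the telescoping product holds
```

Since Z(0) is known in closed form (here 2^4 = 16), estimating every factor of the product yields an estimate of Z(β_K); the total cost is governed by how many MCMC steps each factor needs, which is exactly what the doubly adaptive method tunes.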
Mobile and wearable devices have enabled numerous applications, including activity tracking, wellness monitoring, and human-computer interaction, that measure and improve our daily lives. Many of these applications are made possible by leveraging the rich collection of low-power sensors found in many mobile and wearable devices to perform human activity recognition (HAR). Recently, deep learning has greatly pushed the boundaries of HAR on mobile and wearable devices. This paper systematically categorises and summarises existing work that introduces deep learning methods for wearables-based HAR, and provides a comprehensive analysis of the current advancements, developing trends, and major challenges. We also present cutting-edge frontiers and future directions for deep learning-based HAR.